Handling Missing Data via Max-Entropy Regularized Graph Autoencoder
Abstract
Graph neural networks (GNNs) are popular weapons for modeling relational data. Existing GNNs are not specified for attribute-incomplete graphs, making missing attribute imputation a burning issue. Until recently, many works have noticed that GNNs are coupled with spectral concentration, which means that the obtained spectrum concentrates on a local part of the spectral domain, e.g., the low frequencies due to the oversmoothing issue. As a consequence, GNNs may be seriously flawed for reconstructing graph attributes, as spectral concentration tends to cause low imputation precision. In this work, we present a regularized graph autoencoder for graph attribute imputation, named MEGAE, which aims at mitigating the spectral concentration problem by maximizing the graph spectral entropy. Notably, we first present a method for estimating the graph spectral entropy without the eigen-decomposition of the Laplacian matrix and provide a theoretical upper error bound. A maximum entropy regularization then acts in the latent space, which directly increases the graph spectral entropy. Extensive experiments show that MEGAE outperforms all other state-of-the-art imputation methods on a variety of benchmark datasets.
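To make the regularization concrete, here is a minimal sketch (PyTorch assumed) of how a maximum-entropy term can be attached to a graph autoencoder's masked reconstruction loss. For clarity it computes the graph spectral entropy with an explicit eigen-decomposition of the normalized Laplacian, whereas the paper's contribution is an estimator that avoids eigen-decomposition; the function names and the weight lam are illustrative assumptions, not the authors' code.

# Sketch: max-entropy regularization on the latent space of a graph autoencoder.
# Uses an explicit eigen-decomposition for clarity (the paper avoids it).
import torch

def graph_spectral_entropy(Z, L):
    """Entropy of the latent signal's energy distribution over graph frequencies.
    Z: (n, d) latent node features; L: (n, n) symmetric normalized Laplacian."""
    _, U = torch.linalg.eigh(L)               # graph Fourier basis (eigenvectors of L)
    Z_hat = U.T @ Z                           # spectral coefficients, (n, d)
    energy = (Z_hat ** 2).sum(dim=1)          # energy per graph frequency, (n,)
    p = energy / (energy.sum() + 1e-12)       # normalized spectral energy distribution
    return -(p * torch.log(p + 1e-12)).sum()  # Shannon entropy of that distribution

def megae_style_loss(X_rec, X, mask, Z, L, lam=1.0):
    """Masked reconstruction loss on observed attributes minus an entropy bonus.
    lam is a hypothetical trade-off weight."""
    rec = ((X_rec - X) ** 2 * mask).sum() / mask.sum()
    return rec - lam * graph_spectral_entropy(Z, L)

Maximizing the entropy of the normalized spectral energy pushes the latent representation to spread energy across low and high graph frequencies rather than concentrating it, which is the spectral concentration effect the abstract describes.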
Similar resources
Adversarially Regularized Graph Autoencoder
Graph embedding is an effective method to represent graph data in a low-dimensional space for graph analytics. Most existing embedding algorithms typically focus on preserving the topological structure or minimizing the reconstruction errors of graph data, but they have mostly ignored the data distribution of the latent codes from the graphs, which often results in inferior embedding in real-world ...
Handling Missing Values with Regularized Iterative Multiple Correspondence Analysis
A common approach to deal with missing values in multivariate exploratory data analysis consists in minimizing the loss function over all non-missing elements. This can be achieved by EM-type algorithms where an iterative imputation of the missing values is performed during the estimation of the axes and components. This paper proposes such an algorithm, named iterative multiple correspondence ...
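As a rough illustration of that EM-type scheme, the sketch below (NumPy) alternates between fitting a low-rank model to the completed matrix and re-imputing the missing cells from that fit. A plain truncated SVD stands in for regularized multiple correspondence analysis, and the function name, rank, and iteration count are hypothetical, so this shows the shape of the loop rather than the paper's exact method.

# Sketch: EM-style iterative imputation with a low-rank (SVD) model.
import numpy as np

def iterative_lowrank_impute(X, mask, rank=2, n_iter=50):
    """X: data matrix with arbitrary values where mask == 0 (missing).
    mask: 1 for observed entries, 0 for missing. Returns the completed matrix."""
    X_filled = np.where(mask, X, 0.0)
    col_means = X_filled.sum(axis=0) / np.maximum(mask.sum(axis=0), 1)
    X_filled = np.where(mask, X, col_means)            # initialize missing cells with column means
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X_filled, full_matrices=False)
        X_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank reconstruction (the "estimation" step)
        X_filled = np.where(mask, X, X_hat)            # keep observed entries, re-impute missing ones
    return X_filled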
Subspace Clustering via Graph Regularized Sparse Coding
Sparse coding has gained popularity and interest due to the benefits of dealing with sparse data, mainly space and time efficiencies. It presents itself as an optimization problem with penalties to ensure sparsity. While this approach has been studied in the literature, it has rarely been explored within the confines of clustering data. It is our belief that graph-regularized sparse coding can ...
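For reference, graph-regularized sparse coding is commonly written as the following objective; this is the standard formulation, and the exact variant used by the authors may differ:

$$\min_{D,\,S}\ \lVert X - D S \rVert_F^2 \;+\; \lambda \sum_{i} \lVert s_i \rVert_1 \;+\; \gamma\, \operatorname{tr}\!\big(S L S^{\top}\big),$$

where X holds the data points as columns, D is the dictionary, the s_i are the sparse codes, and L is the Laplacian of a nearest-neighbour affinity graph; the l1 penalty enforces sparsity and the trace term keeps the codes of neighbouring points close.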
Marginalized Denoising Autoencoder via Graph Regularization for Domain Adaptation
Domain adaptation, which aims to learn domain-invariant features for sentiment classification, has received increasing attention. The underlying rationale of domain adaptation is that the involved domains share some common latent factors. Recently, neural networks based on Stacked Denoising Auto-Encoders (SDA) and its marginalized version (mSDA) have shown promising results on learning domain-invariant ...
Missing Data Handling in Multi-Layer Perceptron
The multi-layer perceptron (MLP) trained with the back-propagation algorithm is popular and more widely used than other neural network types as a non-linear predictor in various fields of investigation. Although an MLP can solve complex, non-linear problems, it cannot be trained directly on missing data. We propose a training algorithm for incomplete pattern data using a conventional MLP network. Focusing on the fact that ...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2023
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v37i6.25928